
    High-resolution quantification of stress perfusion defects by cardiac magnetic resonance

    Aims: Quantitative stress perfusion cardiac magnetic resonance (CMR) is becoming more widely available, but it is still unclear how to integrate this information into clinical decision-making. Typically, pixel-wise perfusion maps are generated, but diagnostic and prognostic studies have summarized perfusion as just one value per patient or in 16 myocardial segments. In this study, the reporting of quantitative perfusion maps is extended from the standard 16 segments to a high-resolution bullseye. Cut-off thresholds are established for the high-resolution bullseye, and the identified perfusion defects are compared with visual assessment.
    Methods and results: Thirty-four patients with known or suspected coronary artery disease were retrospectively analysed. Visual perfusion defects were contoured on the CMR images and pixel-wise quantitative perfusion maps were generated. Cut-off values were established on the high-resolution bullseye consisting of 1800 points and compared with the per-segment, per-coronary, and per-patient resolution thresholds. Quantitative stress perfusion was significantly lower in visually abnormal pixels, 1.11 (0.75–1.57) vs. 2.35 (1.82–2.9) mL/min/g (Mann–Whitney U test P < 0.001), with an optimal cut-off of 1.72 mL/min/g. This was lower than the segment-wise optimal threshold of 1.92 mL/min/g. Bland–Altman analysis showed that visual assessment underestimated large perfusion defects compared with the quantification, with good agreement for smaller defect burdens. A Dice overlap of 0.68 (0.57–0.78) was found.
    Conclusion: This study introduces a high-resolution bullseye consisting of 1800 points, rather than 16, per patient for reporting quantitative stress perfusion, which may improve sensitivity. Using this representation, the threshold required to identify areas of reduced perfusion is lower than for segmental analysis.
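    As an illustration of the analysis described above (not the authors' code), the sketch below thresholds a hypothetical 1800-point bullseye of stress MBF values at the reported 1.72 mL/min/g cut-off and compares the resulting defect mask with a visual-contour mask using the Dice coefficient; the array contents and the stand-in visual mask are assumptions.

```python
# Sketch: threshold a high-resolution perfusion bullseye and compare the
# resulting defect mask with a visual contour via the Dice coefficient.
import numpy as np

THRESHOLD_ML_MIN_G = 1.72  # optimal high-resolution cut-off reported in the study

def defect_mask(bullseye_mbf: np.ndarray, threshold: float = THRESHOLD_ML_MIN_G) -> np.ndarray:
    """Flag bullseye points whose stress MBF falls below the cut-off."""
    return bullseye_mbf < threshold

def dice_overlap(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Example with a hypothetical 1800-point bullseye (values in mL/min/g)
rng = np.random.default_rng(0)
mbf_bullseye = rng.uniform(0.5, 3.5, size=1800)
visual_defect = mbf_bullseye < 1.5          # stand-in for the contoured visual defect
quantitative_defect = defect_mask(mbf_bullseye)
print(f"Dice overlap: {dice_overlap(quantitative_defect, visual_defect):.2f}")
```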

    AI-AIF: artificial intelligence-based arterial input function for quantitative stress perfusion cardiac magnetic resonance

    Aims: One of the major challenges in the quantification of myocardial blood flow (MBF) from stress perfusion cardiac magnetic resonance (CMR) is the estimation of the arterial input function (AIF). This is due to the non-linear relationship between the concentration of gadolinium and the MR signal, which leads to signal saturation. In this work, we show that a deep learning model can be trained to predict the unsaturated AIF from standard images, using the reference dual-sequence acquisition AIFs (DS-AIFs) for training.
    Methods and results: A 1D U-Net was trained to take the saturated AIF from the standard images as input and predict the unsaturated AIF, using the data from 201 patients from centre 1 and a test set comprising both an independent cohort of consecutive patients from centre 1 and an external cohort of patients from centre 2 (n = 44). Fully automated MBF was compared between the DS-AIF and AI-AIF methods using the Mann–Whitney U test and Bland–Altman analysis. There was no statistical difference between the MBF quantified with the DS-AIF [2.77 mL/min/g (1.08)] and predicted with the AI-AIF [2.79 mL/min/g (1.08)], P = 0.33. Bland–Altman analysis showed minimal bias between the DS-AIF and AI-AIF methods for quantitative MBF (bias of −0.11 mL/min/g). Additionally, the MBF diagnosis classification of the AI-AIF matched the DS-AIF in 669/704 (95%) of myocardial segments.
    Conclusion: Quantification of stress perfusion CMR is feasible with a single-sequence acquisition and a single contrast injection using an AI-based correction of the AIF.
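    The following is a minimal sketch of a 1D U-Net of the kind described, mapping a saturated AIF curve to an unsaturated one; the depth, channel counts, curve length, and training setup are illustrative assumptions rather than the authors' configuration.

```python
# Sketch of a small 1D U-Net for AIF correction (saturated curve in,
# unsaturated curve out). Architecture details are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class UNet1D(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool1d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose1d(64, 32, kernel_size=2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose1d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.out = nn.Conv1d(16, 1, kernel_size=1)

    def forward(self, x):                      # x: (batch, 1, time)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)                    # predicted unsaturated AIF

# Training would regress against the dual-sequence AIF, e.g. with an MSE loss.
model = UNet1D()
saturated_aif = torch.randn(8, 1, 64)          # dummy batch of saturated curves
predicted_aif = model(saturated_aif)
print(predicted_aif.shape)                     # torch.Size([8, 1, 64])
```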

    Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation: The M&Ms Challenge

    The emergence of deep learning has considerably advanced the state-of-the-art in cardiac magnetic resonance (CMR) segmentation. Many techniques have been proposed over the last few years, bringing the accuracy of automated segmentation close to human performance. However, these models have all too often been trained and validated using cardiac imaging samples from single clinical centres or homogeneous imaging protocols. This has prevented the development and validation of models that are generalizable across different clinical centres, imaging conditions or scanner vendors. To promote further research and scientific benchmarking in the field of generalizable deep learning for cardiac segmentation, this paper presents the results of the Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation (M&Ms) Challenge, which was recently organized as part of the MICCAI 2020 Conference. A total of 14 teams submitted different solutions to the problem, combining various baseline models, data augmentation strategies, and domain adaptation techniques. The obtained results indicate the importance of intensity-driven data augmentation, as well as the need for further research to improve generalizability towards unseen scanner vendors or new imaging protocols. Furthermore, we present a new resource of 375 heterogeneous CMR datasets acquired using four different scanner vendors in six hospitals and three different countries (Spain, Canada and Germany), which we provide as open access for the community to enable future research in the field.
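    As a rough illustration of the intensity-driven data augmentation the challenge results highlight, the sketch below applies random gamma, contrast/brightness, and noise perturbations to a short-axis slice; the specific transforms and ranges are common choices, not the challenge's prescribed pipeline.

```python
# Sketch: simple intensity-driven augmentation for cross-vendor robustness.
# Transform choices and parameter ranges are illustrative assumptions.
import numpy as np

def intensity_augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply random gamma, contrast/brightness scaling and Gaussian noise."""
    img = image.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)    # normalise to [0, 1]
    img = img ** rng.uniform(0.7, 1.5)                          # random gamma
    img = rng.uniform(0.8, 1.2) * img + rng.uniform(-0.1, 0.1)  # contrast / brightness
    img = img + rng.normal(0.0, 0.02, size=img.shape)           # additive Gaussian noise
    return np.clip(img, 0.0, 1.0)

# Example on a dummy short-axis slice
rng = np.random.default_rng(42)
slice_2d = rng.random((256, 256))
augmented = intensity_augment(slice_2d, rng)
```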

    Deep learning applications in myocardial perfusion imaging, a systematic review and meta-analysis

    BACKGROUND: Coronary artery disease (CAD) is a leading cause of death worldwide, and the diagnostic process comprises invasive testing with coronary angiography and non-invasive imaging, in addition to history, clinical examination, and electrocardiography (ECG). A highly accurate assessment of CAD relies on perfusion imaging, performed by myocardial perfusion scintigraphy (MPS) and stress cardiac magnetic resonance (stress CMR). Recently, deep learning has been increasingly applied to perfusion imaging for better understanding of the diagnosis, safety, and outcome of CAD. The aim of this review is to summarise the evidence behind deep learning applications in myocardial perfusion imaging.
    METHODS: A systematic search was performed on the MEDLINE and EMBASE databases, from database inception until September 29, 2020. This included all clinical studies focusing on deep learning applications in myocardial perfusion imaging, and excluded competition conference papers, simulation and animal studies, and studies that used perfusion imaging only as a variable with a different focus. This was followed by review of abstracts and full texts. A meta-analysis was performed on a subgroup of studies which looked at perfusion image classification. A summary receiver-operating characteristic curve (SROC) was used to compare the performance of different models, and the area under the curve (AUC) was reported. Effect size, risk of bias and heterogeneity were tested.
    RESULTS: In total, 46 studies were identified; the majority were MPS studies (76%). The most common neural network was the convolutional neural network (CNN) (41%). Thirteen studies (28%) looked at perfusion image classification using MPS; the pooled diagnostic accuracy showed AUC = 0.859. The SROC comparison showed superior performance of the CNN (AUC = 0.894) compared with the MLP (AUC = 0.848). The funnel plot was asymmetrical and the effect size was significantly different (p < 0.001), indicating a small-study effect and possible publication bias. There was no significant heterogeneity amongst studies according to the Q test (p = 0.2184).
    CONCLUSION: Deep learning has shown promise to improve myocardial perfusion imaging diagnostic accuracy, prediction of patients' events and safety. More research is required in clinical applications, to achieve better care for patients with known or suspected CAD.
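    As a small illustration of the heterogeneity assessment mentioned above, the sketch below computes Cochran's Q statistic and its chi-squared p-value from per-study effect sizes under fixed-effect weighting; the example effect sizes and variances are made up, and the review's exact statistical procedure may differ.

```python
# Sketch: Cochran's Q test for between-study heterogeneity.
# Inputs are hypothetical per-study effect sizes and variances.
import numpy as np
from scipy.stats import chi2

def cochrans_q(effects: np.ndarray, variances: np.ndarray):
    """Fixed-effect pooled estimate, Q statistic, and p-value (df = k - 1)."""
    weights = 1.0 / variances
    pooled = np.sum(weights * effects) / np.sum(weights)
    q = np.sum(weights * (effects - pooled) ** 2)
    p_value = chi2.sf(q, df=len(effects) - 1)
    return pooled, q, p_value

effects = np.array([0.86, 0.89, 0.84, 0.90, 0.83])      # e.g. per-study AUCs
variances = np.array([0.002, 0.003, 0.004, 0.002, 0.005])
pooled, q, p = cochrans_q(effects, variances)
print(f"pooled = {pooled:.3f}, Q = {q:.2f}, p = {p:.3f}")
```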

    Robust non-rigid motion compensation of free-breathing myocardial perfusion MRI data


    Optimized automated cardiac MR scar quantification with GAN-based data augmentation

    Background: The clinical utility of late gadolinium enhancement (LGE) cardiac MRI is limited by the lack of standardization and time-consuming postprocessing. In this work, we tested the hypothesis that a cascaded deep learning pipeline trained with augmentation by synthetically generated data would improve model accuracy and robustness for automated scar quantification.
    Methods: A cascaded pipeline consisting of three consecutive neural networks is proposed, starting with a bounding box regression network to identify a region of interest around the left ventricular (LV) myocardium. Two further nnU-Net models are then used to segment the myocardium and, if present, scar. The models were trained on the data from the EMIDEC challenge, supplemented with an extensive synthetic dataset generated with a conditional GAN.
    Results: The cascaded pipeline significantly outperformed a single nnU-Net directly segmenting both the myocardium (mean Dice similarity coefficient (DSC) (standard deviation (SD)): 0.84 (0.09) vs 0.63 (0.20), p < 0.01) and scar (DSC: 0.72 (0.34) vs 0.46 (0.39), p < 0.01) on a per-slice level. The inclusion of the synthetic data as data augmentation during training improved the scar segmentation DSC by 0.06 (p < 0.01). The mean per-subject DSC on the challenge test set, for the cascaded pipeline augmented by synthetically generated data, was 0.86 (0.03) and 0.67 (0.29) for myocardium and scar, respectively.
    Conclusion: A cascaded deep learning-based pipeline trained with augmentation by synthetically generated data leads to myocardium and scar segmentations that are similar to those of the manual operator, and outperforms direct segmentation without the synthetic images.
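    The sketch below illustrates the cascaded inference flow described in this abstract (bounding box crop, myocardium segmentation, then scar segmentation restricted to the myocardium); the three predict_* callables are stand-ins for the trained networks, not the authors' released code.

```python
# Sketch: three-stage cascaded segmentation on a single short-axis slice.
# The model callables below are dummy placeholders so the example runs.
import numpy as np

def cascaded_scar_segmentation(image, predict_bbox, predict_myocardium, predict_scar):
    """Run the cascaded pipeline: crop, segment myocardium, then segment scar."""
    # Stage 1: bounding box regression around the LV myocardium
    x0, y0, x1, y1 = predict_bbox(image)
    crop = image[y0:y1, x0:x1]

    # Stage 2: myocardium segmentation on the cropped region
    myo_mask = predict_myocardium(crop) > 0.5

    # Stage 3: scar segmentation, restricted to the predicted myocardium
    scar_mask = (predict_scar(crop) > 0.5) & myo_mask

    # Paste the crop-level masks back into full-image coordinates
    full_myo = np.zeros(image.shape, dtype=bool)
    full_scar = np.zeros(image.shape, dtype=bool)
    full_myo[y0:y1, x0:x1] = myo_mask
    full_scar[y0:y1, x0:x1] = scar_mask
    return full_myo, full_scar

# Dummy stand-ins so the sketch runs end to end
image = np.random.default_rng(0).random((224, 224))
bbox = lambda img: (60, 60, 160, 160)
myo = lambda crop: np.random.default_rng(1).random(crop.shape)
scar = lambda crop: np.random.default_rng(2).random(crop.shape)
myo_mask, scar_mask = cascaded_scar_segmentation(image, bbox, myo, scar)
```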